Influence Patterns for Explaining Information Flow in BERT. (arXiv:2011.00740v2 [cs.CL] UPDATED)
While "attention is all you need" may be proving true, we do not know why:
attention-based transformer models such as BERT perform well, but how
information flows from input tokens to output predictions is unclear. We
introduce influence patterns, abstractions of sets of paths through a
transformer model. Patterns quantify and localize the flow of information to
paths passing through a sequence of model nodes. Experimentally, we find that a
significant portion of information flow in BERT goes through skip connections
instead of attention heads. We further show that consistency of patterns across
instances is an indicator of BERT's performance. Finally, we demonstrate that
patterns account for far more model performance than previous attention-based
and layer-based methods.
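To make the skip-connection finding concrete, here is a minimal numpy sketch (not the paper's influence-pattern method) of the underlying structural fact it relies on: in a residual transformer layer, each output token decomposes exactly into a skip-connection path and an attention path, so the flow through each can be measured separately. The matrices `A` and `Wv` and the norm-based comparison are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-head attention layer with a residual (skip) connection:
#   out = x + A @ (x @ Wv),
# where A is a row-stochastic attention matrix (illustrative, fixed here).
n_tok, d = 4, 8
x = rng.normal(size=(n_tok, d))
Wv = rng.normal(size=(d, d)) / np.sqrt(d)

scores = rng.normal(size=(n_tok, n_tok))
A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

out = x + A @ (x @ Wv)

# Decompose each output token into its two paths:
skip_part = x                # information flowing through the skip connection
attn_part = A @ (x @ Wv)     # information flowing through the attention head
assert np.allclose(out, skip_part + attn_part)

# One crude way to compare the two paths: relative norm per token.
skip_norm = np.linalg.norm(skip_part, axis=1)
attn_norm = np.linalg.norm(attn_part, axis=1)
skip_fraction = skip_norm / (skip_norm + attn_norm)
print(skip_fraction)  # fraction of each token's output norm via the skip path
```

The exact decomposition (`out == skip_part + attn_part`) is what makes it meaningful to ask, per path, how much information travels through skip connections versus attention heads; the paper's patterns generalize this to sequences of nodes across layers.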